Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), applying deep learning to a new task usually requires a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: with limited effort (e.g., time) for annotation, which instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional networks (FCNs) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by the FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions produced by our method, state-of-the-art segmentation performance can be achieved with only 50% of the training data.
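The selection strategy the abstract alludes to can be illustrated with a minimal greedy sketch of weighted maximum set cover: each candidate area "covers" the areas it is similar to, and candidates are picked by the uncertainty mass of the areas they newly cover. The function name, the similarity threshold, and the exact definitions of the uncertainty and similarity scores here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def suggest_annotation_areas(uncertainty, similarity, k, sim_threshold=0.8):
    """Greedy sketch of annotation suggestion (assumed simplification).

    uncertainty : (n,) per-area uncertainty scores (e.g., from FCN predictions)
    similarity  : (n, n) pairwise similarity between areas (e.g., cosine
                  similarity of FCN feature descriptors)
    k           : annotation budget (number of areas to suggest)

    An area "covers" every area it is similar to (similarity >= sim_threshold).
    Each round picks the area whose uncovered neighbors carry the most total
    uncertainty -- the standard greedy heuristic for weighted max set cover.
    """
    n = len(uncertainty)
    covers = similarity >= sim_threshold        # boolean coverage sets, row-wise
    covered = np.zeros(n, dtype=bool)
    chosen = []
    for _ in range(k):
        # gain of each candidate: uncertainty mass of the areas it newly covers
        gains = (covers & ~covered) @ uncertainty
        gains[chosen] = -np.inf                 # never pick the same area twice
        best = int(np.argmax(gains))
        if gains[best] <= 0:                    # everything useful is covered
            break
        chosen.append(best)
        covered |= covers[best]
    return chosen
```

With areas 0 and 1 mutually similar, the sketch first picks the one covering the most uncertainty and then skips its redundant neighbor in favor of a dissimilar area, which is exactly the "representative and uncertain" behavior the framework aims for.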